| Parameter | Value | Variable |
|---|---|---|
| Series Code | 21864 | code |
| Maximum Lag | 48 | lag.max |
| Forecast Horizon | 12 | n.ahead |
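These parameters map directly onto R objects used throughout the post (a sketch; the names match the calls to `BETS.get`, `BETS.corrgram`, and `BETS.predict` below):

```r
# Parameters for the analysis (names follow the code in this post)
code    <- 21864  # BETS series code
lag.max <- 48     # maximum lag for the correlograms
n.ahead <- 12     # forecast horizon
```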
library(BETS)
data = BETS.get(code)
info = BETS.search(code = code, view = F)
| Code | Description | Periodicity | Start | Last value | Unit | Source |
|---|---|---|---|---|---|---|
| 21864 | Physical Production - Intermediate goods | M | 01/01/2002 | feb/2017 | Index | IBGE |
library(mFilter)
trend = fitted(hpfilter(data))
library(dygraphs)
dygraph(cbind(Series = data, Trend = trend), main = info[,"Description"]) %>%
dyRangeSelector(strokeColor = "gray", fillColor = "gray") %>%
dyAxis("y", label = info[,"Unit"])
df = BETS.ur_test(y = diff(data), type = "drift", lags = 11, selectlags = "BIC", level = "5pct")
df$results
## statistic crit.val rej.H0
## tau2 -3.204720 -2.88 yes
## phi1 5.135789 4.63 no
At the 5% significance level, the test statistic tau2 is smaller than the critical value, so we reject the null hypothesis of a unit root. We therefore conclude that the differenced series has no unit root.
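The decision rule can be read straight off `df$results`: for the tau test, the unit-root null is rejected when the statistic falls below the critical value. A minimal sketch with the values reported above:

```r
# Values taken from the test output above
tau2_stat <- -3.204720
tau2_crit <- -2.88
# H0 (unit root) is rejected when the statistic is below the critical value
reject_h0 <- tau2_stat < tau2_crit
reject_h0  # TRUE: no unit root in the differenced series
```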
The next test, for seasonal unit roots, is performed at lag 12, which is the frequency of series 21864.
library(forecast)
s_roots = nsdiffs(data)
print(s_roots)
## [1] 0
According to the OCSB test, there is no seasonal unit root, at least at a 5% significance level.
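To see `nsdiffs` flag a seasonal unit root when one is actually present, we can run it on a simulated seasonal random walk (a sketch; the simulated series and the seed are purely illustrative):

```r
library(forecast)
set.seed(123)
# Seasonal random walk: x_t = x_{t-12} + e_t, a process with a seasonal unit root
e <- rnorm(240)
x <- ts(stats::filter(e, filter = c(rep(0, 11), 1), method = "recursive"),
        frequency = 12)
nsdiffs(x, test = "ocsb")  # number of seasonal differences suggested by OCSB
```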
BETS.corrgram(data, lag.max = lag.max, mode = "bartlett", knit = T)
BETS.corrgram(data, lag.max = lag.max, mode = "simple", type = "partial", knit = T)
The correlograms from the last section give us enough information to attempt to identify the parameters of the underlying SARIMA model. We can check our guess by running the auto.arima function from the forecast package. By default, this function uses the AICc (Akaike Information Criterion with finite-sample correction) for model selection. Here, we use the BIC (Bayesian Information Criterion) instead, whose penalty on the number of parameters is larger than the AIC's.
model = auto.arima(data, ic = "bic")
summary(model)
## Series: data
## ARIMA(1,1,0)(1,0,0)[12]
##
## Coefficients:
## ar1 sar1
## -0.2362 0.8364
## s.e. 0.0721 0.0353
##
## sigma^2 estimated as 8.124: log likelihood=-455.11
## AIC=916.23 AICc=916.36 BIC=925.84
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set -0.005083287 2.826849 2.160359 -0.05500061 2.303063 0.489624
## ACF1
## Training set 0.02514418
We see that, according to the BIC, the best model is an ARIMA(1,1,0)(1,0,0)[12].
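The selected specification can also be fitted explicitly with `forecast::Arima`, which is handy for refitting or hand-tweaking the model. A sketch using the built-in AirPassengers series to keep it self-contained (for the series analysed here, substitute `data`):

```r
library(forecast)
# Fit a SARIMA(1,1,0)(1,0,0)[12] directly; `order` is (p, d, q) and
# `seasonal` is (P, D, Q) plus the seasonal period
fit <- Arima(AirPassengers, order = c(1, 1, 0),
             seasonal = list(order = c(1, 0, 0), period = 12))
summary(fit)  # reports the ar1 and sar1 coefficients and information criteria
```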
BETS.predict(model, h = n.ahead, main = info[,"Description"], ylab = info[,"Unit"], knit = T)